Efficient Dynamic Clustering-Based Document Compression for Retrieval-Augmented-Generation
Li, Weitao, Liu, Kaiming, Zhang, Xiangyu, Lei, Xuanyu, Ma, Weizhi, Liu, Yang
Retrieval-Augmented Generation (RAG) has emerged as a widely adopted approach for knowledge injection during large language model (LLM) inference in recent years. However, because current RAG implementations have a limited ability to exploit fine-grained inter-document relationships, they struggle to handle noise and redundancy in the retrieved content, which may cause errors in the generated results. To address these limitations, we propose an Efficient Dynamic Clustering-based document Compression framework (EDC2-RAG) that exploits latent inter-document relationships while removing irrelevant information and redundant content. We validate our approach, built upon GPT-3.5-Turbo and GPT-4o-mini, on widely used knowledge-QA and hallucination-detection datasets. Experimental results show that our method achieves consistent performance improvements across various scenarios and experimental settings, demonstrating strong robustness and applicability. Our code and datasets are available at https://github.com/Tsinghua-dhy/EDC-2-RAG.
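The abstract does not spell out the clustering procedure, but a minimal sketch of this kind of dynamic clustering-based compression might look like the following. The function name, thresholds, and the choice of agglomerative clustering are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

def compress_documents(query_emb, doc_embs, docs,
                       dist_threshold=0.3, min_query_sim=0.2):
    """Cluster near-duplicate retrieved documents, keep one representative
    per cluster, and drop clusters with low similarity to the query."""
    # Group redundant documents: cosine-distance clustering with a threshold
    # instead of a fixed cluster count, so the grouping adapts per query.
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=dist_threshold,
        metric="cosine", linkage="average").fit(doc_embs).labels_
    query_sims = cosine_similarity(query_emb.reshape(1, -1), doc_embs)[0]
    kept = []
    for label in np.unique(labels):
        idx = np.where(labels == label)[0]
        best = idx[np.argmax(query_sims[idx])]   # representative: closest to query
        if query_sims[best] >= min_query_sim:    # filter irrelevant (noise) clusters
            kept.append(best)
    kept.sort(key=lambda i: -query_sims[i])      # most relevant first
    return [docs[i] for i in kept]
```

A caller would embed the query and the retrieved documents with any sentence encoder and pass the resulting vectors in; the compressed list then replaces the raw retrieval results in the LLM prompt.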
HalluShift: Measuring Distribution Shifts towards Hallucination Detection in LLMs
Dasgupta, Sharanya, Nath, Sujoy, Basu, Arkaprabha, Shamsolmoali, Pourya, Das, Swagatam
Large Language Models (LLMs) have recently garnered widespread attention due to their adeptness at generating innovative responses to given prompts across a multitude of domains. However, LLMs suffer from an inherent limitation: they hallucinate, generating incorrect information while maintaining well-structured and coherent responses. In this work, we hypothesize that hallucinations stem from the internal dynamics of LLMs. Our observations indicate that, during passage generation, LLMs tend to deviate from factual accuracy in subtle parts of responses, eventually shifting toward misinformation. This phenomenon bears a resemblance to human cognition, where individuals may hallucinate while maintaining logical coherence, embedding uncertainty within minor segments of their speech. To investigate this further, we introduce HalluShift, an approach designed to analyze distribution shifts in the internal state space and token probabilities of LLM-generated responses. Our method attains superior performance compared to existing baselines across various benchmark datasets. Our codebase is available at https://github.com/sharanya-dasgupta001/hallushift.
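As a rough illustration of distribution-shift features for hallucination detection, assuming access to per-token probabilities and layer-wise hidden states; the concrete features below are simplified stand-ins, not the paper's exact feature set:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def shift_features(token_probs, hidden_states):
    """token_probs: (T,) model probability of each generated token.
    hidden_states: (L, T, D) activations per layer, token, and dimension."""
    nll = -np.log(token_probs + 1e-12)  # per-token surprisal
    # Token-probability features: average surprisal and its drift, i.e. how
    # much less confident the model becomes in the second half of the response.
    half = len(nll) // 2
    prob_drift = nll[half:].mean() - nll[:half].mean()
    # Internal-state feature: 1-D Wasserstein distance between activation-norm
    # distributions of consecutive layers, summarizing representational shift.
    norms = np.linalg.norm(hidden_states, axis=-1)        # (L, T)
    layer_shift = np.mean([wasserstein_distance(norms[l], norms[l + 1])
                           for l in range(norms.shape[0] - 1)])
    return np.array([nll.mean(), nll.max(), prob_drift, layer_shift])
```

In a HalluShift-style pipeline, such feature vectors would be fed to a lightweight binary classifier trained on responses labeled as faithful or hallucinated.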
Citation-Enhanced Generation for LLM-based Chatbots
Li, Weitao, Li, Junkai, Ma, Weizhi, Liu, Yang
Large language models (LLMs) exhibit powerful general intelligence across diverse scenarios, including their integration into chatbots. However, a vital challenge for LLM-based chatbots is that they may produce hallucinated content in responses, which significantly limits their applicability. Various efforts have been made to alleviate hallucination, such as retrieval-augmented generation and reinforcement learning from human feedback, but most of them require additional training and data annotation. In this paper, we propose a novel post-hoc Citation-Enhanced Generation (CEG) approach combined with retrieval augmentation. Unlike previous studies that focus on preventing hallucinations during generation, our method addresses the issue post hoc. It incorporates a retrieval module to search for documents supporting the generated content and employs a natural language inference (NLI)-based citation generation module. If any statement in the generated content lacks a supporting reference, our method regenerates the response until all statements are backed by citations. Note that our method is a training-free, plug-and-play module compatible with various LLMs. Experiments on hallucination-related datasets show that our framework outperforms state-of-the-art methods in both hallucination detection and response regeneration on three benchmarks. Our code and dataset will be publicly available.
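A minimal sketch of such a post-hoc citation loop, where `retrieve` (a top-k document retriever) and `regenerate` (an LLM rewrite call) are caller-supplied placeholders and the NLI model is one arbitrary off-the-shelf choice rather than the authors' setup:

```python
from transformers import pipeline

# Any off-the-shelf NLI model works here; roberta-large-mnli is one example.
nli = pipeline("text-classification", model="roberta-large-mnli")

def add_citations(statements, retrieve, regenerate, max_rounds=3):
    """Attach a supporting document to each statement; regenerate statements
    that no retrieved document entails, for up to max_rounds rewrites."""
    cited, pending = {}, list(statements)
    for _ in range(max_rounds):
        still_pending = []
        for s in pending:
            support = next(
                (d for d in retrieve(s)   # candidate documents for this claim
                 if nli({"text": d, "text_pair": s})[0]["label"] == "ENTAILMENT"),
                None)
            if support is not None:
                cited[s] = support        # statement is grounded: cite this document
            else:
                still_pending.append(regenerate(s))  # rewrite the unsupported claim
        pending = still_pending
        if not pending:
            break
    return cited
```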
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
Li, Junyi, Cheng, Xiaoxue, Zhao, Wayne Xin, Nie, Jian-Yun, Wen, Ji-Rong
Large language models (LLMs), such as ChatGPT, are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge. To understand what types of content, and to what extent, LLMs are apt to hallucinate, we introduce the Hallucination Evaluation benchmark for Large Language Models (HaluEval), a large collection of generated and human-annotated hallucinated samples for evaluating how well LLMs recognize hallucination. To generate these samples, we propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering. In addition, we hire human labelers to annotate hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content on specific topics by fabricating unverifiable information (about 19.5% of responses). Moreover, existing LLMs face great challenges in recognizing hallucinations in text. However, our experiments also show that providing external knowledge or adding reasoning steps can help LLMs recognize hallucinations. Our benchmark can be accessed at https://github.com/RUCAIBox/HaluEval.
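A toy rendering of the sampling-then-filtering idea, with paraphrased prompts and a generic `llm(prompt) -> str` callable standing in for ChatGPT; the paper's actual prompts and filtering criteria differ:

```python
def hallucinated_sample(llm, question, right_answer, n_samples=3):
    """Two-step sampling-then-filtering for building hallucinated QA samples."""
    # Step 1 (sampling): draw several plausible-but-wrong candidate answers.
    candidates = [
        llm(f"Question: {question}\nCorrect answer: {right_answer}\n"
            "Write a plausible but factually incorrect answer:")
        for _ in range(n_samples)
    ]
    # Step 2 (filtering): keep the candidate the model judges most plausible,
    # i.e., the hallucination that is hardest to recognize.
    scores = [
        float(llm(f"Rate from 1 to 10 how plausible this answer to "
                  f"'{question}' sounds. Reply with a number only: {c}"))
        for c in candidates
    ]
    return candidates[scores.index(max(scores))]
```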